A Robust Face Recognition Algorithm for Real-World Applications
The proposed face recognition algorithm represents local facial regions with the discrete cosine transform (DCT). The local representation provides robustness against appearance variations in local regions caused by partial face occlusion or facial expressions, whereas using the frequency information provides robustness against changes in illumination. The algorithm also bypasses the facial feature localization step and formulates face alignment as an optimization problem in the classification stage.
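The block-wise DCT representation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the orthonormal DCT-II matrix is built directly with numpy, and the "zig-zag" coefficient ordering is approximated by sorting on the diagonal index `i + j`.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = np.arange(n)[:, None]   # frequency index
    m = np.arange(n)[None, :]   # sample index
    C = np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0, :] *= np.sqrt(1.0 / n)
    C[1:, :] *= np.sqrt(2.0 / n)
    return C

def local_dct_features(block: np.ndarray, n_coeffs: int = 5) -> np.ndarray:
    """2-D DCT of a square local block; keep the n_coeffs lowest-frequency
    coefficients, ordered by diagonal i+j (an approximation of the usual
    zig-zag scan)."""
    n = block.shape[0]
    C = dct_matrix(n)
    coeffs = C @ block @ C.T    # separable 2-D DCT
    idx = sorted(((i, j) for i in range(n) for j in range(n)),
                 key=lambda ij: (ij[0] + ij[1], ij[0]))
    return np.array([coeffs[i, j] for i, j in idx[:n_coeffs]])

# A constant block concentrates all its energy in the DC coefficient.
flat = np.full((8, 8), 3.0)
feats = local_dct_features(flat)
```

Discarding higher-frequency coefficients keeps the coarse appearance of each block while suppressing illumination-sensitive detail, which is the intuition behind the robustness claim.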
Video-based driver identification using local appearance face recognition
In this paper, we present a person identification system for vehicular environments. The proposed system uses face images of the driver and utilizes local appearance-based face recognition over the video sequence. To perform local appearance-based face recognition, the input face image is decomposed into non-overlapping blocks, and the discrete cosine transform is applied to each local block to extract local features. The extracted local features are then combined to construct the overall feature vector. This process is repeated for each video frame. The distribution of the feature vectors over the video is modelled using a Gaussian distribution function at the training stage. During testing, the feature vector extracted from each frame is compared to each person's distribution, and individual likelihood scores are generated. Finally, the person is identified as the one with the maximum joint-likelihood score. To assess the performance of the developed system, extensive experiments are conducted on different identification scenarios, such as closed-set identification, open-set identification, and verification. For the experiments, a subset of the CIAIR-HCC database, an in-vehicle data corpus collected at Nagoya University, Japan, is used. We show that, despite the varying environmental and illumination conditions that commonly exist in vehicular environments, it is possible to identify individuals robustly from their face images. Index Terms: Local appearance face recognition, vehicle environment, discrete cosine transform, fusion.
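The identification pipeline above (per-frame features, a Gaussian model per person, and a joint-likelihood decision) can be sketched as follows. The per-frame feature vectors here are random stand-ins rather than real DCT features, and the diagonal-covariance Gaussian is an assumption made to keep the sketch short.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_gaussian(frames: np.ndarray):
    """Fit a diagonal-covariance Gaussian to per-frame feature vectors."""
    mu = frames.mean(axis=0)
    var = frames.var(axis=0) + 1e-6   # small regularizer avoids zero variance
    return mu, var

def joint_log_likelihood(frames: np.ndarray, mu, var) -> float:
    """Sum of per-frame Gaussian log-likelihoods (frames assumed independent)."""
    ll = -0.5 * (np.log(2 * np.pi * var) + (frames - mu) ** 2 / var)
    return float(ll.sum())

# Hypothetical enrollment data: per-driver feature vectors from training video.
train = {0: rng.normal(0.0, 1.0, (100, 16)),
         1: rng.normal(3.0, 1.0, (100, 16))}
models = {pid: fit_gaussian(f) for pid, f in train.items()}

# A test sequence drawn from driver 0's distribution.
test_frames = rng.normal(0.0, 1.0, (30, 16))
scores = {pid: joint_log_likelihood(test_frames, mu, var)
          for pid, (mu, var) in models.items()}
identified = max(scores, key=scores.get)
```

Summing log-likelihoods over all frames is what makes the decision robust to a few badly illuminated frames: a single outlier frame cannot overturn the accumulated evidence.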
Using Photorealistic Face Synthesis and Domain Adaptation to Improve Facial Expression Analysis
Cross-domain synthesis of realistic faces for learning deep models has attracted increasing attention in facial expression analysis, as it helps to improve expression recognition accuracy despite a small number of real training images. However, learning from synthetic face images can be problematic due to the distribution discrepancy between low-quality synthetic images and real face images, and may not achieve the desired performance when the learned model is applied to real-world scenarios. To this end, we propose a new attribute-guided face image synthesis method that performs translation between multiple image domains using a single model. In addition, we adopt the proposed model to learn from synthetic faces by matching the feature distributions between different domains while preserving each domain's characteristics. We evaluate the effectiveness of the proposed approach on several face datasets for generating realistic face images. We demonstrate that expression recognition performance can be enhanced by benefiting from our face synthesis model. Moreover, we also conduct experiments on a near-infrared dataset containing facial expression videos of drivers to assess the performance on in-the-wild data for driver emotion recognition.
Comment: 8 pages, 8 figures, 5 tables, accepted by FG 2019. arXiv admin note: substantial text overlap with arXiv:1905.0028
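The abstract does not specify how the feature distributions are matched; one common choice for this kind of domain alignment is the maximum mean discrepancy (MMD), sketched below as an illustrative assumption rather than the paper's actual loss.

```python
import numpy as np

def rbf_kernel(a: np.ndarray, b: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """RBF (Gaussian) kernel matrix between the rows of a and b."""
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mmd2(x: np.ndarray, y: np.ndarray) -> float:
    """Biased estimate of the squared maximum mean discrepancy:
    small when the two feature batches come from similar distributions."""
    return float(rbf_kernel(x, x).mean()
                 + rbf_kernel(y, y).mean()
                 - 2.0 * rbf_kernel(x, y).mean())

rng = np.random.default_rng(1)
real = rng.normal(0.0, 1.0, (200, 8))        # "real-domain" features
synth_near = rng.normal(0.0, 1.0, (200, 8))  # well-aligned synthetic features
synth_far = rng.normal(2.0, 1.0, (200, 8))   # misaligned synthetic features
```

Minimizing such a discrepancy between synthetic-domain and real-domain feature batches during training is one way to realize "matching the feature distributions between different domains".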
Extending explicit shape regression with mixed feature channels and pose priors
Facial feature detection offers a wide range of applications, e.g. in facial image processing, human computer interaction, consumer electronics, and the entertainment industry. These applications impose two antagonistic key requirements: high processing speed and high detection accuracy. We address both by expanding upon the recently proposed explicit shape regression [1] to (a) allow the usage and mixture of different feature channels, and (b) include head pose information to improve detection performance in non-cooperative environments. Using the publicly available "wild" datasets LFW [10] and AFLW [11], we show that these extensions outperform the baseline (up to a 10% gain in accuracy at 8% IOD) as well as other state-of-the-art methods.
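The core idea of explicit shape regression, an additive cascade of stage regressors that each map image-derived features to a shape residual, can be sketched on toy data. This is not the paper's method: real ESR uses shape-indexed pixel-difference features and ferns, whereas here the features are simply noisy observations of the true shape and each stage is a least-squares linear regressor.

```python
import numpy as np

rng = np.random.default_rng(2)

n, d = 300, 8                              # samples, shape dimensionality
true_shapes = rng.normal(0.0, 1.0, (n, d))
# Toy stand-in for image-derived features: noisy views of the true shape.
features = true_shapes + rng.normal(0.0, 0.3, (n, d))

# Initialize every estimate at the mean shape, as ESR does; a pose prior
# would refine this initialization in non-cooperative settings.
est = np.tile(true_shapes.mean(axis=0), (n, 1))
err0 = np.abs(true_shapes - est).mean()

for _ in range(3):                         # cascade of regression stages
    residual = true_shapes - est
    X = np.hstack([features - est, np.ones((n, 1))])  # stage features + bias
    W, *_ = np.linalg.lstsq(X, residual, rcond=None)
    est = est + X @ W                      # additive shape update

err_final = np.abs(true_shapes - est).mean()
```

The essential property preserved here is the cascade structure: each stage only has to predict the remaining residual, so errors shrink stage by stage.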
How Image Degradations Affect Deep CNN-based Face Recognition?
Face recognition approaches based on deep convolutional neural networks (CNN) have been dominating the field. The performance improvements they have provided on the so-called in-the-wild datasets are significant; however, their performance under image quality degradations has not yet been assessed. This is particularly important, since in real-world face recognition applications, images may contain various kinds of degradations due to motion blur, noise, compression artifacts, color distortions, and occlusion. In this work, we address this problem and analyze the influence of these image degradations on the performance of deep CNN-based face recognition approaches using the standard LFW closed-set identification protocol. We evaluate three popular deep CNN models, namely AlexNet, VGG-Face, and GoogLeNet. Results indicate that blur, noise, and occlusion cause a significant decrease in performance, while the deep CNN models are found to be robust to distortions such as color distortions and changes in color balance.
Comment: 8 pages, 3 figures
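Generating the degraded probe images for such an evaluation is straightforward; the sketch below applies three of the degradations named above (blur, additive noise, occlusion) to an image array and measures how far each moves the image, using cosine similarity as a stand-in for a recognition model's matching score. The box blur and the similarity proxy are simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def box_blur(img: np.ndarray, k: int = 3) -> np.ndarray:
    """k x k box blur via shifted copies (circular borders, fine for a sketch)."""
    acc = np.zeros_like(img, dtype=float)
    r = k // 2
    for dx in range(-r, r + 1):
        for dy in range(-r, r + 1):
            acc += np.roll(np.roll(img, dx, axis=0), dy, axis=1)
    return acc / (k * k)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

img = rng.random((32, 32))                 # stand-in for a probe face image
blurred = box_blur(img)                                     # motion-blur proxy
noisy = np.clip(img + rng.normal(0, 0.2, img.shape), 0, 1)  # sensor noise
occluded = img.copy()
occluded[8:24, 8:24] = 0.0                                  # occluding patch

sim_blur = cosine(img, blurred)
sim_noise = cosine(img, noisy)
sim_occl = cosine(img, occluded)
```

In the actual study, each degraded image would be fed through the CNN and scored under the LFW closed-set protocol; the point of the sketch is only how degraded probe sets are constructed.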
A Computer Vision System to Localize and Classify Wastes on the Streets
Littering quantification is an important step toward improving the cleanliness of cities. When human interpretation is too cumbersome, or in some cases impossible, an objective index of cleanliness could reduce littering through awareness actions. In this paper, we present a fully automated computer vision application for littering quantification based on images taken from streets and sidewalks. We employ a deep learning based framework to localize and classify different types of wastes. Since no waste dataset was available, we built our own acquisition system, mounted on a vehicle, and collected images containing different types of wastes. These images were then annotated for training and benchmarking the developed system. Our results on real-case scenarios show accurate detection of littering on various backgrounds.
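Benchmarking a localization system like this one is usually done by matching predicted boxes to annotated ground-truth boxes at an intersection-over-union (IoU) threshold. The sketch below shows that standard matching step; the greedy strategy and the 0.5 threshold are common defaults, assumed here rather than taken from the paper.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match_detections(preds, gts, thr=0.5):
    """Greedily count predictions that overlap an unused ground-truth
    box with IoU >= thr as true positives."""
    used, tp = set(), 0
    for p in preds:
        best, best_iou = None, thr
        for i, g in enumerate(gts):
            if i not in used and iou(p, g) >= best_iou:
                best, best_iou = i, iou(p, g)
        if best is not None:
            used.add(best)
            tp += 1
    return tp

# Hypothetical annotated waste boxes and detector outputs.
gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
preds = [(1, 1, 11, 11), (100, 100, 110, 110)]
tp = match_detections(preds, gts)
```

From the true-positive count, precision and recall (and hence an objective cleanliness index per street segment) follow directly.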
- …